Perceptron

A linear classifier.

Formula

$$ \hat{y} = h(\underline{w}^T\underline{x}) $$

where $h$ is a threshold activation, typically $h(z) = \operatorname{sign}(z)$, so the prediction is a label in $\{-1, +1\}$.

Loss Function (on a misclassified point, writing $\hat{y} = \underline{w}^T\underline{x}$ for the pre-threshold score): $$ \ell(y, \hat{y}) = - y \hat{y} = -y\,\underline{w}^T\underline{x} $$ Update (gradient descent on $\ell$, using $\nabla_{\underline{w}} \ell = -y\underline{x}$): $$ \begin{aligned} \underline{w}^{(t+1)} &= \underline{w}^{(t)} - \eta \nabla \ell(y, \hat{y}) \\ &= \underline{w}^{(t)} + \eta y\underline{x} \\ &= \underline{w}^{(t)} + y\underline{x} &[\eta = 1] \end{aligned} $$
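The update rule above can be sketched as a training loop. This is a minimal NumPy sketch, assuming labels in $\{-1, +1\}$ and $\eta = 1$; the function name `perceptron_train` and the `max_epochs` cap are my own additions, not from the notes.

```python
import numpy as np

def perceptron_train(X, y, max_epochs=100):
    """Perceptron with learning rate eta = 1, as in the update above.

    X: (n, d) array of inputs, y: (n,) array of labels in {-1, +1}.
    Returns the weight vector w (no bias term; absorb it into X if needed).
    """
    n, d = X.shape
    w = np.zeros(d)                      # w^(0) = 0
    for _ in range(max_epochs):
        mistakes = 0
        for x_i, y_i in zip(X, y):
            if y_i * (w @ x_i) <= 0:     # misclassified (or on the boundary)
                w = w + y_i * x_i        # w^(t+1) = w^(t) + y x
                mistakes += 1
        if mistakes == 0:                # no errors left: a separator was found
            break
    return w
```

Note that the weights only change on mistakes; correctly classified points leave $\underline{w}$ untouched, which is what the convergence analysis below counts.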

Validate: a single update strictly decreases the loss on the misclassified point (using $y^2 = 1$):

$$ \begin{aligned} \ell(\hat{y}, y) &= - y(\underline{w}^{(t)})^T \cdot \underline{x} > 0 \\ \ell(\hat{y}', y) &= -y(\underline{w}^{(t+1)})^T \cdot \underline{x} \\ &= -y(\underline{w}^{(t)} + y\underline{x})^T \cdot \underline{x} \\ &= -y(\underline{w}^{(t)})^T\cdot\underline{x} - y^2||\underline{x}||^2 \\ &= \ell(\hat{y}, y) - ||\underline{x}||^2 < \ell(\hat{y}, y) \end{aligned} $$
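The drop of exactly $||\underline{x}||^2$ can be checked numerically. A small sketch with a made-up misclassified point (the specific numbers are illustrative, not from the notes):

```python
import numpy as np

# One misclassified point: y = +1 but w^T x < 0, so the loss is positive.
w = np.array([-1.0, 0.5])
x = np.array([2.0, 1.0])
y = 1.0

loss_before = -y * (w @ x)          # ell(y, y_hat) > 0 on a mistake
w_new = w + y * x                   # perceptron update with eta = 1
loss_after = -y * (w_new @ x)

# Matches the derivation: the loss drops by exactly ||x||^2 (since y^2 = 1).
assert np.isclose(loss_before - loss_after, x @ x)
```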

Convergence

Assume $ \{(\underline{x_i}, y_i)\}_{i=1}^{n} $ is linearly separable. Assume $ \max_i ||\underline{x_i}||_2 = 1 $ by normalization.

Suppose there exists a good separator $ \underline{w}^{*} $ s.t.

$$ \begin{aligned} ||\underline{w}^{*}||_2 &= 1 \\ \forall i, y_i (\underline{w}^{*} \cdot \underline{x_i}) &\geq \delta > 0 \end{aligned} $$

Bound¹ (each update moves $\underline{w}$ toward $\underline{w}^{*}$ by at least $\delta$; the sum telescopes from $\underline{w}^{(0)} = \underline{0}$): $$ \begin{aligned} (\underline{w}^{*} \cdot \underline{w}^{(t+1)}) - (\underline{w}^{*} \cdot \underline{w}^{(t)}) &= \underline{w}^{*} \cdot (\underline{w}^{(t+1)} - \underline{w}^{(t)}) \\ &= \underline{w}^{*} \cdot y_i \cdot \underline{x_i} \\ &= y_i (\underline{w}^{*})^T \underline{x_i} \\ &\geq \delta \\ (\underline{w}^{*} \cdot \underline{w}^{(T)}) &= (\underline{w}^{*} \cdot \underline{w}^{(T)}) - (\underline{w}^{*} \cdot \underline{0}) \\ &= (\underline{w}^{*} \cdot \underline{w}^{(T)}) - (\underline{w}^{*} \cdot \underline{w}^{(0)}) \\ &= \sum_{t=1}^{T} \left[ (\underline{w}^{*} \cdot \underline{w}^{(t)}) - (\underline{w}^{*} \cdot \underline{w}^{(t-1)}) \right] \\ &\geq T \cdot \delta \end{aligned} $$

Bound² (the cross term $2y_i(\underline{w}^{(t-1)})^T\underline{x_i} \leq 0$ because updates happen only on mistakes, and $||\underline{x_i}||_2^2 \leq 1$ by normalization): $$ \begin{aligned} ||\underline{w}^{(t)}||_2^2 &= ||\underline{w}^{(t-1)} + y_i\underline{x_i}||_2^2 \\ &= ||\underline{w}^{(t-1)}||_2^2 + 2y_i(\underline{w}^{(t-1)})^T\underline{x_i} + ||\underline{x_i}||_2^2 \\ &\leq ||\underline{w}^{(t-1)}||_2^2 + ||\underline{x_i}||_2^2 \\ &\leq ||\underline{w}^{(t-1)}||_2^2 + 1 \\ ||\underline{w}^{(T)}||_2^2 &\leq T \\ ||\underline{w}^{(T)}||_2 &\leq \sqrt{T} \\ \end{aligned} $$

Then, since cosine similarity is at most 1 (Cauchy-Schwarz): $$ \begin{aligned} 1 \geq \frac{\underline{w}^{*} \cdot \underline{w}^{(T)}}{||\underline{w}^{*}||_2 \cdot ||\underline{w}^{(T)}||_2} &\geq \frac{T \cdot \delta}{\sqrt{T}} = \sqrt{T} \cdot \delta \\ T &\leq \frac{1}{\delta^2} \end{aligned} $$

Thus, on linearly separable data with margin $\delta$, the perceptron is guaranteed to find a separator after at most $\frac{1}{\delta^2}$ updates (mistakes).
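The mistake bound can be observed empirically. A small sketch on synthetic data: I construct a dataset with $\max_i ||\underline{x_i}||_2 \leq 1$ and margin at least $\delta$ against a unit separator $\underline{w}^{*}$ (the specific $\underline{w}^{*}$, $\delta$, and sample size are arbitrary choices), then count updates until convergence and compare against $1/\delta^2$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Separable toy data with margin >= delta against the unit separator w* = (1, 0):
# sample in the square, cap norms at 1, and discard points inside the margin band.
w_star = np.array([1.0, 0.0])
delta = 0.2
X = rng.uniform(-1.0, 1.0, size=(300, 2))
X /= np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1.0)  # max_i ||x_i||_2 <= 1
X = X[np.abs(X @ w_star) >= delta]                              # enforce the margin
y = np.sign(X @ w_star)

# Run the perceptron until no mistakes remain, counting updates along the way.
w = np.zeros(2)
updates = 0
converged = False
while not converged:
    converged = True
    for x_i, y_i in zip(X, y):
        if y_i * (w @ x_i) <= 0:
            w += y_i * x_i
            updates += 1
            converged = False

# The theorem guarantees updates <= 1/delta^2 = 25 here, in any presentation order.
assert updates <= 1 / delta**2
```

The bound counts total mistakes, not epochs, so it holds even though the loop cycles through the data repeatedly.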

Reference

by Jon